Location Reparameterization and Default Priors for Statistical Analysis

Authors

  • D. A. S. Fraser
  • Grace Yun Yi
Abstract:

This paper develops default priors for Bayesian analysis that reproduce familiar frequentist and Bayesian analyses for models that are exponential or location. For the vector parameter case there is an information adjustment that avoids the Bayesian marginalization paradoxes and properly targets the prior on the parameter of interest, thus adjusting for any complicating nonlinearity; the details of this vector Bayesian issue will be investigated elsewhere. As in wide generality a statistical model has an inference component structure that is approximately exponential or approximately location to third order, this provides general default prior procedures that can be described as reweighting likelihood in accord with a Jeffreys' prior based on observed information. Two asymptotic models that have variable and parameter of the same dimension, and that agree at a data point to first derivative conditional on an approximate ancillary, produce the same p-values to third order for inferences concerning scalar interest parameters. With a given model of interest there is then the opportunity to choose a second model to best assist the calculations or best achieve certain inference objectives. Exponential models are useful for obtaining accurate approximations, while location models present possible parameter values in a direct measurement or location manner. We derive the general construction of the location reparameterization that gives the natural parameter of the location model coinciding with the given model to first derivative at a data point; the derivation is in algorithmic form suitable for computer algebra. We then define a general default prior based on this location reparameterization; this gives third order agreement between frequentist p-values and Bayesian survivor values in the vector case, although an adjustment factor is needed for component parameters that are not linear in the location parameterization. The general default prior can be difficult to calculate, but if we choose to work only to second order, a Jeffreys' prior based on the observed information function gives second order agreement between frequentist p-values and Bayesian survivor values; again adjustments are needed for parameters nonlinear in the vector location parameter, and the adjustment is a ratio of two nuisance information determinants, one for the nuisance parameter as given and one for the locally equivalent linear nuisance parameter.
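As an illustrative sketch only (the symbols ℓ, j, ψ, λ, β and the schematic form of the adjustment are assumptions made here for exposition, not the paper's own notation): the second-order version described above reweights the observed likelihood by a Jeffreys-type factor built from observed rather than expected information, and the nuisance adjustment is a ratio of nuisance information determinants.

% Minimal compilable sketch; notation and the schematic adjustment are illustrative assumptions.
\documentclass{article}
\usepackage{amsmath}
\begin{document}

% Log-likelihood and observed information at the observed data y^0:
\[
\ell(\theta) = \ell(\theta; y^0), \qquad
j_{\theta\theta'}(\theta) = -\,\frac{\partial^2 \ell(\theta; y^0)}{\partial\theta\,\partial\theta'} .
\]

% Second-order default prior: a Jeffreys-type weight based on the observed
% information function, used to reweight the observed likelihood:
\[
\pi(\theta) \propto \bigl| j_{\theta\theta'}(\theta) \bigr|^{1/2}, \qquad
\pi(\theta \mid y^0) \propto \pi(\theta)\, \exp\{\ell(\theta; y^0)\} .
\]

% For an interest parameter \psi with nuisance parameter \lambda that is nonlinear
% in the location parameterization \beta(\theta), the abstract describes an adjustment
% given by a ratio of two nuisance information determinants: one for \lambda as given
% and one for a locally equivalent linear nuisance parameter \tilde\lambda
% (written here schematically):
\[
\text{adjustment} \;=\;
\frac{\bigl| j_{\lambda\lambda'}(\hat\theta_\psi) \bigr|}
     {\bigl| j_{\tilde\lambda\tilde\lambda'}(\hat\theta_\psi) \bigr|} ,
\]
where $\hat\theta_\psi$ denotes the constrained maximum likelihood value for fixed $\psi$
(an assumed evaluation point for this sketch).

\end{document}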


Similar resources

On default priors and approximate location models

A prior for statistical inference can be one of three basic types: a mathematical prior originally proposed in Bayes [Philos. Trans. R. Soc. Lond. 53 (1763) 370–418; 54 (1764) 269–325], a subjective prior presenting an opinion, or a truly objective prior based on an identified frequency reference. In this note we consider a method for deriving a mathematical prior based on approximate location ...

Default Priors for Gaussian Processes

Motivated by the statistical evaluation of complex computer models, we deal with the issue of objective prior specification for the parameters of Gaussian processes. In particular, we derive the Jeffreys-rule, independence Jeffreys and reference priors for this situation, and prove that the resulting posterior distributions are proper under a quite general set of conditions. A proper flat prior...

Default priors for Bayesian and frequentist inference

We investigate the choice of default priors for use with likelihood for Bayesian and frequentist inference. Such a prior is a density or relative density that weights an observed likelihood function, leading to the elimination of parameters that are not of interest and then a density-type assessment for a parameter of interest. For independent responses from a continuous model, we develop a pri...

Default Priors for Neural Network Classification

Feedforward neural networks are a popular tool for classification, offering a method for fully flexible modeling. This paper looks at the underlying probability model, so as to understand statistically what is going on in order to facilitate an intelligent choice of prior for a fully Bayesian analysis. The parameters turn out to be difficult or impossible to interpret, and yet a coherent prior ...

Statistical Foundations for Default Reasoning

We describe a new approach to default reasoning based on a principle of indifference among possible worlds. We interpret default rules as extreme statistical statements, thus obtaining a knowledge base KB comprised of statistical and first-order statements. We then assign equal probability to all worlds consistent with KB in order to assign a degree of belief to a statement. The degree of belief can ...



Journal title

Volume 1, Issue None

Pages 55–78

Publication date: 2002-11
